
    Visualizing classification of natural video sequences using sparse, hierarchical models of cortex.

    Recent work on hierarchical models of visual cortex has reported state-of-the-art accuracy on whole-scene labeling using natural still imagery. This raises the question of whether the reported accuracy may be due to the sophisticated, non-biological back-end supervised classifiers typically used (support vector machines) and/or the limited number of images used in these experiments. In particular, is the model classifying features from the object or from the background? Previous work (Landecker, Brumby, et al., COSYNE 2010) proposed tracing the spatial support of a classifier’s decision back through a hierarchical cortical model to determine which parts of the image contributed to the classification, compared to the positions of objects in the scene. In this way, we can go beyond standard measures of accuracy to provide tools for visualizing and analyzing high-level object classification. We now describe new work extending these ideas to the detection of objects in video sequences of natural scenes.
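
    The tracing idea can be illustrated with a toy example. The following is a minimal sketch, not the authors' implementation: a single max-pooling layer stands in for the cortical hierarchy, a random linear read-out stands in for the SVM back end, and each pooled unit's contribution to the decision is assigned back to the lower-layer location that won the pool, giving a per-pixel contribution map. All sizes, weights, and responses are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "V1-like" feature map: responses on a 16x16 retinotopic grid.
    feature_map = rng.random((16, 16))

    # One stage of the hierarchy: 4x4 max pooling onto a 4x4 grid.
    pool = 4
    pooled = feature_map.reshape(4, pool, 4, pool).max(axis=(1, 3))

    # Linear read-out standing in for the SVM back end: decision = sum(w * x).
    w = rng.normal(size=pooled.shape)
    decision = float((w * pooled).sum())

    # Trace support: assign each pooled unit's contribution w_ij * x_ij back to
    # the lower-layer location that won its max pool, yielding a per-pixel
    # contribution map to compare against object positions in the scene.
    support = np.zeros_like(feature_map)
    for i in range(4):
        for j in range(4):
            block = feature_map[i*pool:(i+1)*pool, j*pool:(j+1)*pool]
            r, c = np.unravel_index(block.argmax(), block.shape)
            support[i*pool + r, j*pool + c] = w[i, j] * pooled[i, j]

    print(f"decision value: {decision:.3f}")
    print("share of |contribution| in top-left quadrant:",
          np.abs(support[:8, :8]).sum() / np.abs(support).sum())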

    Model Cortical Association Fields Account for the Time Course and Dependence on Target Complexity of Human Contour Perception

    Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans were measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation-selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas.
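
    The sigmoidal fits can be illustrated as follows. This is a minimal sketch assuming a saturating-exponential psychometric function for the 2AFC task, in which accuracy rises from chance (0.5) toward an asymptote A with time constant tau after an onset delay t0; the data points below are synthetic placeholders, not the published measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    def psychometric(t, A, tau, t0):
        """2AFC accuracy vs. presentation time t (ms): chance 0.5 rising to A."""
        return 0.5 + (A - 0.5) * (1.0 - np.exp(-np.clip(t - t0, 0.0, None) / tau))

    # Presentation times spanning the experimental range (20-200 ms), with
    # synthetic accuracies standing in for the measured data.
    t = np.array([20.0, 40.0, 60.0, 80.0, 100.0, 150.0, 200.0])
    acc = np.array([0.52, 0.61, 0.72, 0.80, 0.85, 0.90, 0.91])

    (A, tau, t0), _ = curve_fit(psychometric, t, acc, p0=[0.9, 50.0, 10.0])
    print(f"asymptote A = {A:.2f}, time constant tau = {tau:.0f} ms, onset t0 = {t0:.0f} ms")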

    Conference on Small Missions For Energetic Astrophysics


    Dynamic World training dataset for global land use and land cover categorization of satellite imagery

    The Dynamic World Training Data is a dataset of over 5 billion pixels of human-labeled ESA Sentinel-2 satellite imagery, distributed over 24,000 tiles collected from all over the world. The dataset is designed to train and validate automated land use and land cover mapping algorithms. The 10m-resolution, 5.1km-by-5.1km tiles are densely labeled using a ten-category classification schema indicating general land use and land cover categories. The dataset was created between 2019-08-01 and 2020-02-28, using satellite imagery observations from 2019, with approximately 10% of observations extending back to 2017 in very cloudy regions of the world. This dataset is a component of the National Geographic Society - Google - World Resources Institute Dynamic World project.

    The dataset consists of two file types: GeoTIFF files containing the markup provided by human labelers for 510x510-pixel, 10m-resolution satellite image tiles, and Excel (.xlsx) tables of metadata and class statistics for those GeoTIFF files. The data is organized into three main folders. One folder contains training data labeled by a team of 25 expert human labelers recruited by the National Geographic Society specifically for this project. A second folder contains training data labeled by a larger group of commissioned labelers provided by a commercial crowd-labeling service. The data in these folders is organized by hemisphere and by biome number from the RESOLVE Ecoregions2017 biome categories (https://ecoregions2017.appspot.com/). A third folder contains a validation dataset: a holdout set of training data for assessing model accuracy, none of which is intended to be used in the formulation of the model. Each validation tile was independently labeled by three experts, and the validation set contains two versions: the individual markup from each expert labeler, and image composites of the individual markups.

    Each GeoTIFF file encodes the locations of landscape feature classes as determined by a given labeler. Classes were labeled by visual examination of true-color (RGB) composites of Sentinel-2 MultiSpectral Level-2A scenes. The Tier 1 class values used in this phase of the project are as follows: 0 No data (left unmarked), 1 Water, 2 Trees, 3 Grass, 4 Flooded Vegetation, 5 Crops, 6 Scrub, 7 Built Area, 8 Bare Ground, 9 Snow/Ice, 10 Cloud. This dataset does not include the original Sentinel-2 imagery tiles, but metadata giving the exact image ID and date is provided; the original Sentinel-2 imagery was obtained via Google Earth Engine.

    This data is available under a Creative Commons BY-4.0 license and requires the following attribution: This dataset is produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute. Development of the Dynamic World training data was funded in part by the Gordon and Betty Moore Foundation.
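    As a usage illustration, the following minimal sketch reads one label tile and tallies class frequencies against the Tier 1 schema above. The file path is hypothetical, and rasterio is one common way to read GeoTIFFs, assumed here rather than prescribed by the dataset release.

    import numpy as np
    import rasterio

    # Tier 1 class values from the dataset documentation.
    CLASS_NAMES = {
        0: "No data", 1: "Water", 2: "Trees", 3: "Grass",
        4: "Flooded Vegetation", 5: "Crops", 6: "Scrub",
        7: "Built Area", 8: "Bare Ground", 9: "Snow/Ice", 10: "Cloud",
    }

    # Hypothetical path to one expert-labeled tile.
    with rasterio.open("expert_labels/tile_example.tif") as src:
        labels = src.read(1)  # single-band 510x510 array of class values

    values, counts = np.unique(labels, return_counts=True)
    for v, c in zip(values, counts):
        name = CLASS_NAMES.get(int(v), "Unknown")
        print(f"{name:>18s}: {c / labels.size:6.2%}")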

    Genetic Programming Approach to Extracting Features From Remotely Sensed Imagery

    Multi-instrument data sets present an interesting challenge to feature extraction algorithm developers. Beyond the immediate problems of spatial co-registration, the remote sensing scientist must explore a complex algorithm space in which both spatial and spectral signatures may be required to identify a feature of interest. We describe a genetic programming/supervised classifier software system, called Genie, which evolves and combines spatio-spectral image processing tools for remotely sensed imagery. We describe our representation of candidate image processing pipelines and discuss our set of primitive image operators. Our primary application has been in the field of geospatial feature extraction, including wildfire scars and general land-cover classes, using publicly available multi-spectral imagery (MSI) and hyper-spectral imagery (HSI). Here, we demonstrate our system on Landsat 7 Enhanced Thematic Mapper (ETM+) MSI. We exhibit an evolved pipeline and discuss its operation and performance.
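
    The pipeline representation can be sketched as follows. This is a minimal, simplified stand-in, not Genie itself: a candidate individual is a list of primitive operators applied to chosen spectral bands, and fitness is agreement between the thresholded pipeline output and training markup; a genetic algorithm would then mutate and recombine such gene lists. The primitive set, data, and fitness function here are illustrative assumptions.

    import numpy as np
    from scipy import ndimage

    # Toy 3-band "MSI" cube and a training mask marking the feature of interest.
    rng = np.random.default_rng(1)
    image = rng.random((3, 64, 64))
    truth = image[2] > 0.5  # pretend band 2 carries the signal

    # Primitive operator set: each maps a 2-D plane to a 2-D plane.
    PRIMITIVES = {
        "smooth": lambda x: ndimage.uniform_filter(x, size=3),
        "erode": lambda x: ndimage.grey_erosion(x, size=3),
        "dilate": lambda x: ndimage.grey_dilation(x, size=3),
    }

    def run_pipeline(genes, image):
        """genes: list of (operator name, band index); plane outputs are summed."""
        out = np.zeros(image.shape[1:])
        for op, band in genes:
            out += PRIMITIVES[op](image[band])
        return out

    def fitness(genes):
        """Fraction of pixels where the thresholded output matches the markup."""
        out = run_pipeline(genes, image)
        return float(((out > out.mean()) == truth).mean())

    candidate = [("smooth", 2), ("dilate", 2)]
    print(f"fitness of candidate pipeline: {fitness(candidate):.3f}")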